Deep neural networks (DNNs) are widely used and play an important role in computer vision and autonomous navigation. However, these DNNs are computationally complex, and deploying them on resource-constrained platforms is difficult without additional optimization and customization. In this manuscript, we present an overview of DNN architectures and propose methods to reduce their computational complexity, in order to accelerate training and inference and make them suitable for edge computing platforms with limited computational resources.
State-of-the-art neural networks with early exit mechanisms often require considerable training and fine-tuning to achieve good performance at low computational cost. We propose a novel early exit technique based on the class means of samples, Early Exit Class Means (E$^2$CM). Unlike most existing schemes, E$^2$CM requires no gradient-based training of internal classifiers and does not modify the base network in any way. This makes it particularly useful for neural network training on low-power devices, as in wireless edge networks. We evaluate the performance and overhead of E$^2$CM on base networks such as MobileNetV3, EfficientNet, and ResNet, and on datasets such as CIFAR-100, ImageNet, and KMNIST. Our results show that, given a fixed training time budget, E$^2$CM achieves higher accuracy than existing early exit mechanisms. Moreover, if the training time budget is not limited, E$^2$CM can be combined with an existing early exit scheme to boost the latter's performance, achieving a better trade-off between computational cost and network accuracy. We also show that E$^2$CM can be used to reduce the computational cost of unsupervised learning tasks.
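As a concrete illustration of the class-means idea, the following sketch maintains per-class feature means at one layer and exits when a sample's feature is close enough to its nearest mean. The Euclidean distance, the threshold rule, and all names here are illustrative assumptions rather than the paper's exact formulation:

```python
# Minimal sketch of a class-means early exit at a single layer
# (distance metric and threshold are assumptions, not the paper's exact rule).
import numpy as np

class ClassMeansExit:
    """Per-class running feature means with a distance-threshold exit rule."""

    def __init__(self, num_classes, feature_dim, threshold):
        self.means = np.zeros((num_classes, feature_dim))
        self.counts = np.zeros(num_classes)
        self.threshold = threshold

    def update(self, feature, label):
        # Running average of features per class -- no gradient-based training.
        self.counts[label] += 1
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def try_exit(self, feature):
        # Exit early if the feature is close enough to its nearest class mean.
        dists = np.linalg.norm(self.means - feature, axis=1)
        nearest = int(np.argmin(dists))
        return nearest if dists[nearest] < self.threshold else None
```

In use, one such tracker would be attached to each candidate exit layer; a forward pass calls try_exit after each layer and stops at the first non-None prediction, falling back to the final classifier otherwise.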
We consider a setting in which a master wants to run a distributed stochastic gradient descent (SGD) algorithm on $n$ workers, each holding a subset of the data. Distributed SGD may suffer from stragglers, i.e., slow or unresponsive workers that cause delays. One solution studied in the literature is to wait at each iteration for the responses of the fastest $k < n$ workers before updating the model, where $k$ is a fixed parameter. The choice of the value of $k$ presents a trade-off between the runtime (i.e., convergence rate) of SGD and the error of the model. Towards optimizing this error-runtime trade-off, we investigate distributed SGD with an adaptive $k$, i.e., a $k$ that varies throughout the runtime of the algorithm. We first design an adaptive policy for varying $k$ that optimizes this trade-off based on an upper bound on the error as a function of wall-clock time, which we derive. We then propose and implement an algorithm for adaptive distributed SGD based on a statistical heuristic. Our results show that the adaptive version of distributed SGD can reach lower error values in less time than non-adaptive implementations. Moreover, the results show that the adaptive version is communication-efficient: the amount of communication required between the master and the workers is smaller than in non-adaptive versions.
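To make the fastest-$k$ mechanism concrete, here is a minimal simulated sketch. The k_schedule below is a hypothetical heuristic, not the bound-derived policy of the paper:

```python
# Minimal simulation of fastest-k distributed SGD with an adaptive k
# (the schedule below is a made-up heuristic for illustration).
import numpy as np

def adaptive_k_sgd(grad_fns, w, lr, iters, k_schedule):
    """grad_fns[i](w) returns (gradient, response_time) for worker i."""
    for t in range(iters):
        k = k_schedule(t)  # small k early for speed, larger k later for accuracy
        # Simulation: gather all n responses, then keep only the k fastest
        # (a real master would simply stop waiting after k responses).
        fastest = sorted((f(w) for f in grad_fns), key=lambda r: r[1])[:k]
        g = np.mean([grad for grad, _ in fastest], axis=0)
        w = w - lr * g
    return w

# Hypothetical schedule: start with few workers, wait for more as training matures.
k_schedule = lambda t: min(2 + t // 50, 8)
```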
The plethora of sensors in our commodity devices provides a rich substrate for sensor-fused tracking. Yet, today's solutions are unable to deliver robust, high-accuracy tracking of multiple agents in practical, everyday environments, which is core to the future of immersive and collaborative applications. This can be attributed to the limited scope of diversity leveraged by these fusion solutions, preventing them from catering simultaneously to the multiple dimensions of accuracy, robustness (to diverse environmental conditions), and scalability (to multiple agents). In this work, we take an important step towards this goal by introducing the notion of dual-layer diversity to the problem of sensor fusion in multi-agent tracking. We demonstrate that the fusion of complementary tracking modalities, passive/relative (e.g., visual odometry) and active/absolute (e.g., infrastructure-assisted RF localization), offers a key first layer of diversity that brings scalability, while the second layer of diversity lies in the fusion methodology, where we bring together the strengths of algorithmic (for robustness) and data-driven (for accuracy) approaches. RoVaR is an embodiment of this dual-layer diversity approach that intelligently attends to cross-modal information using algorithmic and data-driven techniques, which jointly share the burden of accurately tracking multiple agents in the wild. Extensive evaluations reveal RoVaR's multi-dimensional performance in tracking accuracy (median), robustness (in unseen environments), and light weight (running in real time on mobile platforms such as the Jetson Nano/TX2), enabling practical multi-agent immersive applications in everyday environments.
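As a stand-in for the fusion described above, the sketch below dead-reckons with the relative modality and corrects drift with the absolute one via a simple complementary filter. RoVaR's actual algorithmic/data-driven fusion is considerably more sophisticated; the weight alpha and all names here are assumptions:

```python
# Minimal sketch of fusing a relative modality (visual odometry) with an
# absolute one (RF localization) via a complementary filter -- a simplified
# stand-in for RoVaR's actual fusion pipeline.
import numpy as np

def fuse_step(prev_pos, vo_delta, rf_fix, alpha=0.9):
    """One fusion step: VO dead-reckoning pulled toward the absolute RF fix.

    alpha (an assumed value) weights the smooth but drift-prone VO prediction
    against the noisy but drift-free RF measurement.
    """
    predicted = prev_pos + vo_delta                  # relative update (drifts)
    return alpha * predicted + (1 - alpha) * rf_fix  # absolute correction
```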
Efficient and adaptive computer vision systems have been proposed to optimize computer vision tasks, such as image classification and object detection, for embedded or mobile devices. These solutions, quite recent in origin, focus on optimizing the model (a deep neural network, DNN) or the system by designing an adaptive system with approximation knobs. Despite several recent efforts, we show that existing solutions suffer from two major drawbacks. First, the system does not consider the energy consumption of the models when deciding which model to run. Second, the evaluations do not consider the practical scenario of contention on the device due to other co-resident workloads. In this work, we propose Virtuoso, an efficient and adaptive video object detection system that is jointly optimized for accuracy, energy efficiency, and latency. Underlying Virtuoso is a multi-branch execution kernel capable of running at different operating points along the accuracy-energy-latency axes, together with a lightweight runtime scheduler that selects the best execution branch to satisfy the user requirement. To compare fairly against Virtuoso, we benchmark 15 state-of-the-art or widely used protocols, including Faster R-CNN (FRCNN), YOLO v3, SSD, EfficientDet, SELSA, MEGA, REPP, FastAdapt, and our in-house adaptive variants FRCNN+, YOLO+, SSD+, and EfficientDet+ (our variants with enhanced efficiency for mobiles). With this comprehensive benchmark, Virtuoso shows superiority over all the above protocols, leading the accuracy frontier at every efficiency level on NVIDIA Jetson mobile GPUs. Specifically, Virtuoso achieves an accuracy of 63.9%, more than 10% higher than some popular object detection models, e.g., FRCNN at 51.1% and YOLO at 49.5%.
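The scheduler's role can be sketched as a constrained selection over profiled branches. The branch profiles and selection rule below are invented for illustration; Virtuoso's actual scheduler also adapts to runtime contention:

```python
# Minimal sketch of a runtime scheduler over profiled execution branches
# (branch numbers are hypothetical, not Virtuoso's measured profiles).
branches = [
    # (name, accuracy %, latency ms, energy mJ) -- assumed profile data
    ("full",   63.9, 120.0, 900.0),
    ("medium", 58.0,  60.0, 450.0),
    ("light",  49.0,  25.0, 180.0),
]

def select_branch(max_latency_ms, max_energy_mj):
    """Return the most accurate branch that fits both user budgets."""
    feasible = [b for b in branches
                if b[2] <= max_latency_ms and b[3] <= max_energy_mj]
    # Fall back to the cheapest branch if no profile satisfies the budgets.
    return max(feasible, key=lambda b: b[1]) if feasible else branches[-1]
```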
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
A Digital Twin (DT) is a simulation of a physical system that provides information to support decisions that add economic, social or commercial value. The behaviour of a physical system changes over time; a DT must therefore be continually updated with data from the physical system to reflect its changing behaviour. For resource-constrained systems, updating a DT is non-trivial because of challenges such as on-board learning and off-board data transfer. This paper presents a framework for updating data-driven DTs of resource-constrained systems, geared towards system health monitoring. The proposed solution consists of: (1) an on-board system running a light-weight DT that allows the prioritisation and parsimonious transfer of data generated by the physical system; and (2) off-board robust updating of the DT and detection of anomalous behaviours. Two case studies using a production gas turbine engine system demonstrate the accuracy of the digital representation for real-world, time-varying physical systems.
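One simple way to realize the prioritised, parsimonious transfer in (1) is to rank samples by how poorly the on-board DT predicts them. The residual rule, threshold, and names below are assumptions for illustration, not the paper's design:

```python
# Minimal sketch of residual-based prioritisation for off-board transfer
# (threshold and scalar-output model are illustrative assumptions).
import numpy as np

def select_for_transfer(samples, dt_predict, threshold):
    """Prioritize (input, measurement) pairs the on-board DT explains poorly."""
    residuals = [abs(y - dt_predict(x)) for x, y in samples]
    order = np.argsort(residuals)[::-1]  # largest residual first
    # Transfer only samples whose residual exceeds the budgeted threshold.
    return [samples[i] for i in order if residuals[i] > threshold]
```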
We consider infinite horizon Markov decision processes (MDPs) with fast-slow structure, meaning that certain parts of the state space move "fast" (and in a sense, are more influential) while other parts transition more "slowly." Such structure is common in real-world problems where sequential decisions need to be made at high frequencies, yet information that varies at a slower timescale also influences the optimal policy. Examples include: (1) service allocation for a multi-class queue with (slowly varying) stochastic costs, (2) a restless multi-armed bandit with an environmental state, and (3) energy demand response, where both day-ahead and real-time prices play a role in the firm's revenue. Models that fully capture these problems often result in MDPs with large state spaces and large effective time horizons (due to frequent decisions), rendering them computationally intractable. We propose an approximate dynamic programming algorithmic framework based on the idea of "freezing" the slow states, solving a set of simpler finite-horizon MDPs (the lower-level MDPs), and applying value iteration (VI) to an auxiliary MDP that transitions on a slower timescale (the upper-level MDP). We also extend the technique to a function approximation setting, where a feature-based linear architecture is used. On the theoretical side, we analyze the regret incurred by each variant of our frozen-state approach. Finally, we give empirical evidence that the frozen-state approach generates effective policies using just a fraction of the computational cost, while illustrating that simply omitting slow states from the decision modeling is often not a viable heuristic.
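A toy tabular sketch of the lower-level step: with the slow state frozen, run finite-horizon value iteration over the fast states. Shapes and names are illustrative assumptions; the resulting values would then drive value iteration in the upper-level, slow-timescale MDP:

```python
# Minimal sketch of the "frozen slow state" lower-level solve
# (tabular toy; data layout is an assumption for illustration).
import numpy as np

def solve_frozen(P, R, slow, horizon, gamma):
    """Finite-horizon value iteration over fast states with `slow` frozen.

    P[slow][a] : (n_fast, n_fast) fast-state transition matrix under action a
    R[slow]    : (n_fast, n_actions) reward table
    """
    n_fast, n_actions = R[slow].shape
    V = np.zeros(n_fast)
    for _ in range(horizon):
        # Bellman backup with the slow state held fixed throughout the horizon.
        Q = np.stack([R[slow][:, a] + gamma * P[slow][a] @ V
                      for a in range(n_actions)], axis=1)
        V = Q.max(axis=1)
    return V  # feeds the upper-level MDP that transitions on the slow timescale
```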
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Machine learning is the dominant approach to artificial intelligence, through which computers learn from data and experience. In the framework of supervised learning, for a computer to learn from data accurately and efficiently, some auxiliary information about the data distribution and target function should be provided to it through the learning model. This notion of auxiliary information relates to the concept of regularization in statistical learning theory. A common feature among real-world datasets is that data domains are multiscale and target functions are well-behaved and smooth. In this paper, we propose a learning model that exploits this multiscale data structure and discuss its statistical and computational benefits. The hierarchical learning model is inspired by the logical and progressive easy-to-hard learning mechanism of human beings and has interpretable levels. The model apportions computational resources according to the complexity of data instances and target functions. This property can have multiple benefits, including higher inference speed and computational savings in training a model for many users or when training is interrupted. We provide a statistical analysis of the learning mechanism using multiscale entropies and show that it can yield significantly stronger guarantees than uniform convergence bounds.
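A coarse-to-fine sketch of how such a model might apportion computation by instance difficulty is given below; the predictors, confidence function, and threshold are assumptions, not the paper's construction:

```python
# Minimal sketch of hierarchical easy-to-hard inference: cheap, coarse levels
# handle easy instances and defer hard ones to finer, more expensive levels
# (all components here are illustrative assumptions).
def hierarchical_predict(x, levels, confidence, tau=0.9):
    """levels: predictor functions ordered from coarse/cheap to fine/expensive."""
    for predict in levels[:-1]:
        y = predict(x)
        if confidence(y) >= tau:  # easy instance: stop at a coarse level
            return y
    return levels[-1](x)          # hard instance: pay for the finest level
```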